Concerning the poor privacy and flexibility of traditional methods for estimating the lifetime of human motion, a lifetime estimation system based on analyzing the amplitude variation of WiFi Channel State Information (CSI) was proposed. In this system, the continuous and complex lifetime estimation problem was transformed into a discrete and simple human motion detection problem. Firstly, the CSI was collected, and outliers and noise were filtered out. Secondly, Principal Component Analysis (PCA) was applied to reduce the dimensionality of the subcarriers, yielding the principal components and the corresponding eigenvectors. Thirdly, the variance of the principal components and the mean of the first-order difference of the eigenvectors were calculated, and a Back Propagation Neural Network (BPNN) model was trained with the ratio of these two values as the feature. Fourthly, human motion detection was performed by the trained BPNN model, and when human motion was detected, the CSI data were divided into segments of equal width. Finally, after motion detection had been performed on all CSI segments, the lifetime of the human motion was estimated from the number of segments in which motion was detected. In a real indoor environment, the average accuracy of human motion detection reaches 97% and the error rate of the estimated lifetime is below 10%. The experimental results show that the proposed system can effectively estimate the lifetime of human motion.
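The final segmentation-and-counting step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the variance-threshold detector is a hypothetical stand-in for the trained BPNN classifier, and all function names, the segment width, and the threshold are illustrative.

```python
def split_segments(stream, width):
    """Divide a CSI amplitude stream into equal-width segments."""
    return [stream[i:i + width] for i in range(0, len(stream) - width + 1, width)]

def detect_motion(segment, threshold=1.0):
    """Stand-in detector: high amplitude variance suggests human motion.
    (The paper trains a BPNN on the variance/eigenvector-difference ratio.)"""
    mean = sum(segment) / len(segment)
    var = sum((x - mean) ** 2 for x in segment) / len(segment)
    return var > threshold

def estimate_lifetime(stream, width, seg_duration_s, threshold=1.0):
    """Lifetime = number of motion-positive segments x segment duration."""
    segments = split_segments(stream, width)
    positives = sum(detect_motion(s, threshold) for s in segments)
    return positives * seg_duration_s

# A quiet period, 100 samples of oscillation (motion), then quiet again:
stream = [0.0] * 100 + [5.0, -5.0] * 50 + [0.0] * 100
print(estimate_lifetime(stream, width=50, seg_duration_s=1.0))  # -> 2.0
```

With 50-sample segments, only the two segments covering the oscillating region exceed the variance threshold, so the estimated lifetime is 2 segment-durations.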
Focusing on the problem that the serious imbalance between abnormal and normal data in anomaly detection degrades the performance of decision trees, three improved C4.5 decision tree methods were proposed: C4.5+δ, UDE (Uniform Distribution Entropy) and IDEF (Improved Distribution Entropy Function). Firstly, it was deduced that the attribute selection criterion of C4.5 tends to choose attributes with imbalanced splits. Secondly, the reason why imbalanced splitting decreases the accuracy of anomaly (minority-class) detection was analyzed. Thirdly, the attribute selection criterion of C4.5, the information gain ratio, was improved by introducing a relaxation factor and uniform distribution entropy, or by substituting a distribution entropy function. Finally, the three improved decision trees were verified on the WEKA platform with the NSL-KDD dataset. Experimental results show that all three proposed methods increase the accuracy of anomaly detection. Compared with C4.5, the accuracies of C4.5+δ, UDE and IDEF on the KDDTest-21 dataset are improved by 3.16, 3.02 and 3.12 percentage points respectively, outperforming methods that use Rényi entropy or Tsallis entropy as the splitting criterion. Furthermore, using the improved decision trees to detect anomalies in an industrial control system not only improves the recall of anomalies but also reduces the false positive rate.
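The relaxation-factor idea behind C4.5+δ can be sketched as below: adding a constant δ to the split information in the denominator of the gain ratio reduces the criterion's preference for highly imbalanced splits (whose split information is small). This is a hypothetical parameterisation for illustration; the paper's exact formulation may differ.

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of a class-count distribution."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def gain_ratio_c45_delta(parent_counts, splits, delta=0.5):
    """C4.5+delta gain ratio (illustrative): information gain divided by
    (split information + delta). With delta = 0 this is plain C4.5."""
    total = sum(parent_counts)
    info_gain = entropy(parent_counts) - sum(
        sum(s) / total * entropy(s) for s in splits)
    split_info = entropy([sum(s) for s in splits])
    return info_gain / (split_info + delta)

# A pure but imbalanced split of 8 normal vs 2 abnormal samples:
print(gain_ratio_c45_delta([8, 2], [[8, 0], [0, 2]], delta=0.0))  # plain C4.5: 1.0
print(gain_ratio_c45_delta([8, 2], [[8, 0], [0, 2]], delta=0.5))  # penalised
```

With δ = 0 the pure-but-imbalanced split scores a perfect gain ratio of 1.0; a positive δ lowers that score, which is the direction of the proposed correction.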
The Spectral Embedded Clustering (SEC) algorithm requires the samples to satisfy the manifold assumption, so that the class labels of the samples can always be embedded in a linear space, which provides a new idea for the spectral clustering of linearly separable data; however, the linear mapping function used by SEC cannot handle nonlinear high-dimensional data. To solve this problem, the linear mapping function was kernelized and a Kernel-based Spectral Embedded Clustering (KSEC) model was built. This model overcomes the inability of the linear mapping function to deal with nonlinear data, while simultaneously achieving dimensionality reduction in the kernel space. The experimental results on real datasets show that the improved algorithm increases the clustering accuracy by 13.11% on average, and by up to 31.62%; for high-dimensional data in particular, the clustering accuracy is increased by 16.53% on average. Parameter sensitivity experiments demonstrate the stability of the improved algorithm; thus, compared with traditional spectral clustering algorithms, higher accuracy and better clustering performance are obtained. The method can be applied to complex image processing fields such as remote sensing image processing.
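The kernelization step can be illustrated as follows: replacing inner products of the linear mapping with a kernel function yields a kernel matrix, which is then double-centered before extracting the embedding directions. This is a generic kernel-trick sketch under an assumed RBF kernel, not the paper's full KSEC model.

```python
import math

def rbf_kernel_matrix(X, gamma=1.0):
    """RBF kernel matrix: K[i][j] = exp(-gamma * ||x_i - x_j||^2).
    Using a kernel in place of the linear map W^T x lets the embedding
    separate nonlinearly distributed data."""
    n = len(X)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            sq = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            K[i][j] = math.exp(-gamma * sq)
    return K

def center_kernel(K):
    """Double-center the kernel matrix (K_c = H K H with H = I - (1/n) 11^T),
    the standard preprocessing step before extracting kernel principal
    directions for dimensionality reduction."""
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

K = rbf_kernel_matrix([[0.0, 0.0], [1.0, 0.0]], gamma=1.0)
Kc = center_kernel(K)
```

Every row and column of the centered matrix sums to zero, which is what makes the subsequent eigendecomposition act as dimensionality reduction in the implicit kernel feature space.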
Concerning the huge computational cost of sparse decomposition, a fast sparse decomposition algorithm with low computational complexity was proposed for first-order Polynomial Phase Signals (PPS). In this algorithm, firstly, two concatenated dictionaries, Df and Dp, were constructed, in which the atoms of Df were built from frequencies and the atoms of Dp from phases. Secondly, for the dictionary Df, group testing was used to search for the atoms that matched the signal, and the correlation values between the atoms and the signal were tested twice to ensure reliability. Finally, the dictionary Dp was constructed according to the matching frequency atoms found by group testing, and the matching phase atoms were searched by the Matching Pursuit (MP) algorithm, thus completing the sparse decomposition of real first-order PPS. The simulation results show that the computational efficiency of the proposed algorithm is about 604 times that of matching pursuit and about 139 times that of the genetic algorithm; hence the presented algorithm has lower computational complexity and can finish sparse decomposition quickly. The complexity of the algorithm is only O(N).
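The group-testing search over the frequency dictionary can be sketched as below: atoms are first scored in groups (one correlation per group), and only the winning group is tested atom by atom. The cosine atoms, group scoring rule, and sizes here are illustrative assumptions, not the paper's exact dictionary construction.

```python
import math

def correlation(signal, atom):
    """Inner product between the signal and an atom."""
    return sum(s * a for s, a in zip(signal, atom))

def make_freq_atom(freq, n):
    """Illustrative frequency atom for Df: a unit-norm cosine at `freq`."""
    atom = [math.cos(2 * math.pi * freq * t / n) for t in range(n)]
    norm = math.sqrt(sum(a * a for a in atom))
    return [a / norm for a in atom]

def group_test_search(signal, freqs, group_size):
    """Two-stage group testing: pick the group whose summed atoms best
    correlate with the signal, then test atoms within that group
    individually (the second, confirming correlation test)."""
    n = len(signal)
    groups = [freqs[i:i + group_size] for i in range(0, len(freqs), group_size)]
    def group_score(g):
        summed = [sum(make_freq_atom(f, n)[t] for f in g) for t in range(n)]
        return abs(correlation(signal, summed))
    best_group = max(groups, key=group_score)
    return max(best_group,
               key=lambda f: abs(correlation(signal, make_freq_atom(f, n))))

# A pure tone at frequency bin 5 is recovered with one test per group
# plus group_size individual tests, instead of one test per atom:
signal = [math.cos(2 * math.pi * 5 * t / 64) for t in range(64)]
print(group_test_search(signal, list(range(1, 17)), group_size=4))  # -> 5
```

Because the cosine atoms at distinct integer frequencies are orthogonal over a full period, the summed-group correlation is dominated by the one matching atom, which is what lets group testing prune most of the dictionary cheaply.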
To solve the problem of the large amount of calculation and nonlinear programming involved in service composition optimization, a Cost Benefit Coefficient (CBC) approach was proposed for optimizing the reliability of Web service composition under a given cost investment. First, the structural patterns of service composition and the related reliability functions were analyzed. Then, a reliability calculation method for Web service composition was proposed and a nonlinear optimization model was established accordingly. Next, the cost benefit coefficient was computed from the relationship between the cost and the reliability of the component services, the optimization schemes of the Web service composition were determined, and the optimization results were computed according to the nonlinear optimization model. Finally, under a given cost investment, the higher reliability achieved by this approach was verified by comparing it with the traditional method on the reliability data of component services. The experimental results show that the proposed approach is effective and reasonable for the reliability optimization of Web service composition.
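One way to picture the CBC idea is a greedy budget allocation: each candidate upgrade is scored by reliability gain per unit cost, and the budget is spent on the highest-scoring upgrades first. The upgrade model (one optional upgrade per service) and the sequential-composition reliability function are assumptions for illustration; the paper's nonlinear model is more general.

```python
def series_reliability(reliabilities):
    """Reliability of a sequential (series) composition: the product of the
    component reliabilities."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def optimize_by_cbc(reliabilities, options, budget):
    """Greedy allocation by cost benefit coefficient (hypothetical model).
    options[i] = (cost, new_reliability) for upgrading service i; repeatedly
    buy the affordable upgrade with the highest reliability gain per unit
    cost until nothing affordable remains."""
    rel = list(reliabilities)
    remaining = budget
    pending = dict(enumerate(options))
    while pending:
        def cbc(item):
            i, (cost, new_r) = item
            return (new_r - rel[i]) / cost
        affordable = [it for it in pending.items()
                      if it[1][0] <= remaining and it[1][1] > rel[it[0]]]
        if not affordable:
            break
        i, (cost, new_r) = max(affordable, key=cbc)
        rel[i] = new_r
        remaining -= cost
        del pending[i]
    return rel, series_reliability(rel)

# Budget covers one upgrade; the weaker service offers the better
# gain-per-cost, so it is upgraded first:
rel, total = optimize_by_cbc([0.9, 0.8], [(10.0, 0.95), (10.0, 0.95)], 10.0)
print(rel, total)  # -> [0.9, 0.95] 0.855
```

The greedy rule captures the intuition behind the coefficient: when reliability multiplies along a sequence, a unit of cost buys the most composite reliability where the gain-to-cost ratio is highest.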